Creators/Authors contains: "Bruno, A"

  1. Reservoir computing advances the intriguing idea that a nonlinear recurrent neural circuit—the reservoir—can encode spatio-temporal input signals, enabling efficient solutions to tasks like classification or regression. However, the idea of a monolithic reservoir network that simultaneously buffers input signals and expands them into nonlinear features has recently been challenged. A representation scheme in which the memory buffer and the expansion into higher-order polynomial features can be configured separately has been shown to significantly outperform traditional reservoir computing in the prediction of multivariate time series. Here we propose a configurable neuromorphic representation scheme that provides competitive prediction performance, but with significantly better scaling properties than directly materializing the higher-order features as in prior work. Our approach combines randomized representations from traditional reservoir computing with mathematical principles for approximating polynomial kernels via such representations. While the memory buffer can be realized with standard reservoir networks, computing higher-order features requires networks of 'Sigma-Pi' neurons, i.e., neurons that enable both summation and multiplication of their inputs. Finally, we provide an implementation of the memory buffer and Sigma-Pi networks on Loihi 2, an existing neuromorphic hardware platform.
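A minimal sketch of the kernel-approximation idea behind this abstract (my illustration under simplifying assumptions, not the paper's Loihi 2 implementation): degree-2 polynomial features can be approximated by multiplying two random projections per feature (the 'Pi' stage) and summing at readout (the 'Sigma' stage), so inner products of the randomized features estimate the polynomial kernel (x·y)².

```python
import numpy as np

# Hypothetical illustration: approximating the degree-2 polynomial kernel
# (x.y)^2 with random 'Sigma-Pi' features (a Random-Maclaurin-style sketch,
# not the paper's neuromorphic implementation).
rng = np.random.default_rng(0)
d, D = 4, 50_000                           # input dim, number of random features
W1 = rng.choice([-1.0, 1.0], size=(D, d))  # fixed random sign projections
W2 = rng.choice([-1.0, 1.0], size=(D, d))

def sigma_pi_features(x):
    # each feature multiplies two random projections ('Pi' neurons);
    # the 1/sqrt(D) scaling makes dot products unbiased kernel estimates
    return (W1 @ x) * (W2 @ x) / np.sqrt(D)

x = np.array([1.0, 0.5, -0.5, 0.0])
y = np.array([0.5, 1.0, 0.0, -0.5])
estimate = sigma_pi_features(x) @ sigma_pi_features(y)
exact = (x @ y) ** 2   # the kernel value being approximated
```

With enough random features the estimate concentrates around the exact kernel value, which is why such randomized representations scale better than materializing all higher-order features explicitly.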
  2. Analysing a visual scene by inferring the configuration of a generative model is widely considered the most flexible and generalizable approach to scene understanding. Yet, one major problem is the computational challenge of the inference procedure, involving a combinatorial search across object identities and poses. Here we propose a neuromorphic solution exploiting three key concepts: (1) a computational framework based on vector symbolic architectures (VSAs) with complex-valued vectors, (2) the design of hierarchical resonator networks to factorize the non-commutative transforms translation and rotation in visual scenes and (3) the design of a multi-compartment spiking phasor neuron model for implementing complex-valued resonator networks on neuromorphic hardware. The VSA framework uses vector binding operations to form a generative image model in which binding acts as the equivariant operation for geometric transformations. A scene can therefore be described as a sum of vector products, which can then be efficiently factorized by a resonator network to infer objects and their poses. The hierarchical resonator network features a partitioned architecture in which vector binding is equivariant for horizontal and vertical translation within one partition and for rotation and scaling within the other partition. The spiking neuron model allows mapping the resonator network onto efficient and low-power neuromorphic hardware. Our approach is demonstrated on synthetic scenes composed of simple two-dimensional shapes undergoing rigid geometric transformations and colour changes. A companion paper demonstrates the same approach in real-world application scenarios for machine vision and robotics.
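To make the binding idea concrete, here is a toy sketch (mine, not the paper's hierarchical resonator network or spiking phasor model): complex phasor vectors are bound by element-wise multiplication and unbound by multiplying with the complex conjugate, so a factor can be recovered from a sum of bound pairs up to crosstalk.

```python
import numpy as np

# Toy illustration of VSA binding with complex-valued phasor vectors;
# a minimal sketch, not the resonator network from the paper.
rng = np.random.default_rng(1)
n = 1024

def random_phasor():
    # unit-modulus complex vector with random phases
    return np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n))

shape_a, pos_a = random_phasor(), random_phasor()
shape_b, pos_b = random_phasor(), random_phasor()

# a scene as a sum of vector products (each shape bound to its pose)
scene = shape_a * pos_a + shape_b * pos_b

# unbinding with the conjugate of pos_a recovers shape_a plus crosstalk
recovered = scene * np.conj(pos_a)
sim_correct = abs(np.vdot(recovered, shape_a)) / n
sim_wrong = abs(np.vdot(recovered, shape_b)) / n
```

The correct factor yields a near-unit normalized similarity while the wrong one stays near zero; a resonator network iterates this kind of unbinding jointly over all factors to infer objects and poses.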
  3. Shedding light on 'through space' spin–spin coupling constants (SSCCs), this study challenges hydrogen bonding's dominance in JFH SSCC transmission in organofluorine compounds. Steric, substituent, and solvent effects considerably alter SSCC pathways.
  4. A prominent approach to solving combinatorial optimization problems on parallel hardware is Ising machines, i.e., hardware implementations of networks of interacting binary spin variables. Most Ising machines leverage second-order interactions, although important classes of optimization problems, such as satisfiability problems, map more seamlessly to Ising networks with higher-order interactions. Here, we demonstrate that higher-order Ising machines can solve satisfiability problems more resource-efficiently in terms of the number of spin variables and their connections when compared to traditional second-order Ising machines. Further, our results on a benchmark dataset of Boolean k-satisfiability problems show that higher-order Ising machines implemented with coupled oscillators rapidly find solutions that are better than those of second-order Ising machines, thus improving the current state of the art for Ising machines.
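As an illustration of why higher-order interactions map naturally to satisfiability (a sketch of the standard mapping idea, not the paper's coupled-oscillator hardware): a k-SAT clause becomes a single degree-k energy term that equals 1 exactly when the clause is unsatisfied, so the total energy counts unsatisfied clauses and a zero-energy ground state is a satisfying assignment.

```python
import itertools

# Sketch of a higher-order Ising encoding of k-SAT (illustrative mapping
# only). A clause is a list of (variable index, sign): sign +1 for a
# positive literal, -1 for a negated one. Spins are in {-1, +1}, +1 = True.
def clause_energy(spins, clause):
    # product of (1 - sign*s)/2 over the literals: a degree-k interaction
    # that equals 1 iff every literal in the clause is False
    e = 1
    for i, sign in clause:
        e *= (1 - sign * spins[i]) // 2
    return e

def energy(spins, clauses):
    # total energy = number of unsatisfied clauses
    return sum(clause_energy(spins, c) for c in clauses)

# (x0 or x1 or not x2) and (not x0 or x2 or x3) and (not x1 or not x3 or x2)
clauses = [[(0, 1), (1, 1), (2, -1)],
           [(0, -1), (2, 1), (3, 1)],
           [(1, -1), (3, -1), (2, 1)]]

# brute-force ground state over 4 spins; energy 0 means satisfiable
best = min(itertools.product([-1, 1], repeat=4),
           key=lambda s: energy(s, clauses))
```

Encoding each clause as one degree-k term avoids the auxiliary variables and extra connections that a reduction to second-order interactions would require, which is the resource advantage the abstract describes.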
  5. Large solar eruptions are often associated with long-duration γ-ray emission extending well above 100 MeV. While this phenomenon is known to be caused by high-energy ions interacting with the solar atmosphere, the underlying dominant acceleration process remains under debate. Potential mechanisms include continuous acceleration of particles trapped within large coronal loops or acceleration at coronal mass ejection (CME)-driven shocks, with subsequent back-propagation toward the Sun. As a test of the latter scenario, previous studies have explored the relationship between the inferred particle population producing the high-energy γ-rays and the population of solar energetic particles (SEPs) measured in situ. However, given the significant limitations on available observations, these estimates unavoidably rely on a number of assumptions. In an effort to better constrain theories of the γ-ray emission origin, we reexamine the calculation uncertainties and how they influence the comparison of these two proton populations. We show that, even accounting for conservative assumptions related to the γ-ray flare, SEP event, and interplanetary scattering modeling, their statistical relationship is only weakly to moderately significant. However, though the level of correlation is of interest, it does not provide conclusive evidence for or against a causal connection. The main result of this investigation is that the fraction of the shock-accelerated protons required to account for the γ-ray observations is >20%–40% for six of the 14 eruptions analyzed. Such high values argue against current CME-shock origin models, which predict a <2% back-precipitation; hence, the computed number of high-energy SEPs appears to be greatly insufficient to sustain the measured γ-ray emission.
  6. We investigate the task of retrieving information from compositional distributed representations formed by hyperdimensional computing/vector symbolic architectures and present novel techniques that achieve new information rate bounds. First, we provide an overview of the decoding techniques that can be used to approach the retrieval task. The techniques are categorized into four groups. We then evaluate the considered techniques in several settings that involve, for example, inclusion of external noise and storage elements with reduced precision. In particular, we find that the decoding techniques from the sparse coding and compressed sensing literature (rarely used for hyperdimensional computing/vector symbolic architectures) are also well suited for decoding information from the compositional distributed representations. Combining these decoding techniques with interference cancellation ideas from communications improves previously reported bounds (Hersche et al., 2021) of the information rate of the distributed representations from 1.20 to 1.40 bits per dimension for smaller codebooks and from 0.60 to 1.26 bits per dimension for larger codebooks.
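A minimal sketch of the interference-cancellation idea mentioned above (my illustration; the paper evaluates a broader family of decoders, including sparse-coding and compressed-sensing methods): symbols are superimposed as a sum of bipolar codewords, decoded greedily by matched-filter scores, and each decoded codeword is subtracted before decoding the next.

```python
import numpy as np

# Illustrative decoding of a compositional (superposition) representation
# with successive interference cancellation; hypothetical parameters.
rng = np.random.default_rng(2)
n, M = 512, 16                          # vector dimension, codebook size
codebook = rng.choice([-1.0, 1.0], size=(M, n))

stored = [3, 7, 11]                     # symbols placed in superposition
s = codebook[stored].sum(axis=0)

def decode(vec, k):
    residual = vec.copy()
    out = []
    for _ in range(k):
        scores = codebook @ residual    # matched-filter (dot-product) scores
        j = int(np.argmax(scores))
        out.append(j)
        residual -= codebook[j]         # cancel the decoded codeword
    return sorted(out)
```

Subtracting each decoded codeword removes its crosstalk from the residual, which is what allows more symbols (i.e., more bits per dimension) to be packed into one vector than plain matched-filter readout supports.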
  7. Vieira, Cristina (Ed.)
    Genome assemblies are growing at an exponential rate and have proved indispensable for studying evolution, but the effort has been biased toward vertebrates and arthropods, with a particular focus on insects. Onychophora, or velvet worms, are an ancient group of cryptic, soil-dwelling worms noted for their unique mode of prey capture, biogeographic patterns, and diversity of reproductive strategies. They constitute a poorly understood phylum of exclusively terrestrial animals that is the sister group to arthropods. Due to this phylogenetic position, they are crucial for understanding the origin of the largest phylum of animals. Despite their significance, there is a paucity of genomic resources for the phylum, with only one highly fragmented and incomplete genome publicly available. Initial attempts at sequencing an onychophoran genome proved difficult due to its large genome size and high repeat content. However, leveraging recent advances in long-read sequencing technology, we present here the first annotated draft genome for the phylum. With a total size of 5.6 Gb, the gigantism of the Epiperipatus broadwayi genome arises from high repeat content, intron size inflation, and extensive gene family expansion. Additionally, we report a previously unknown diversity of onychophoran hemocyanins, suggesting that the diversification of copper-mediated oxygen carriers occurred independently in Onychophora after its split from Arthropoda, parallel to the independent diversification of hemocyanins in each of the main arthropod lineages.
  8. We describe a stochastic, dynamical system capable of inference and learning in a probabilistic latent variable model. The most challenging problem in such models—sampling the posterior distribution over latent variables—is proposed to be solved by harnessing natural sources of stochasticity inherent in electronic and neural systems. We demonstrate this idea for a sparse coding model by deriving a continuous-time equation for inferring its latent variables via Langevin dynamics. The model parameters are learned by simultaneously evolving according to another continuous-time equation, thus bypassing the need for digital accumulators or a global clock. Moreover, we show that Langevin dynamics lead to an efficient procedure for sampling from the posterior distribution in the L0 sparse regime, where latent variables are encouraged to be set to zero as opposed to having a small L1 norm. This allows the model to properly incorporate the notion of sparsity rather than having to resort to a relaxed version of sparsity to make optimization tractable. Simulations of the proposed dynamical system on both synthetic and natural image data sets demonstrate that the model is capable of probabilistically correct inference, enabling learning of the dictionary as well as parameters of the prior.
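The Langevin idea can be illustrated in one dimension (a simplified Gaussian toy of my own; the paper's continuous-time L0 sparse-coding dynamics are considerably more involved): the latent variable follows the gradient of its log-posterior plus injected noise, and the trajectory's samples approximate the posterior distribution.

```python
import numpy as np

# Minimal 1-D discretized Langevin sampler (illustrative only; it does not
# capture the paper's L0-like prior or analog continuous-time dynamics).
rng = np.random.default_rng(3)

def langevin(grad_log_p, a0=0.0, eta=0.01, steps=50_000):
    a, out = a0, []
    for _ in range(steps):
        # gradient ascent on log p plus noise scaled by sqrt(2*eta)
        a = a + eta * grad_log_p(a) + np.sqrt(2.0 * eta) * rng.standard_normal()
        out.append(a)
    return np.array(out)

# toy target posterior N(mean=2, var=1): grad log p(a) = -(a - 2)
samples = langevin(lambda a: -(a - 2.0))[2_000:]   # drop burn-in
```

Because the update needs only a local gradient and a noise source, the same scheme can in principle run on analog substrates whose intrinsic stochasticity supplies the noise term, which is the hardware motivation stated in the abstract.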